Talk:Field experiment


Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment between 4 September 2018 and 21 December 2018. Further details are available on the course page. Student editor(s): Kdabug.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 21:14, 17 January 2022 (UTC)

Dr. Carpenter's comment on this article

Dr. Carpenter has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


First, I think it is wrong to suppose there is a trade-off between control and external validity, as the first paragraph implies. In most cases, I think there is no reason why control should be sacrificed in the field. Control can simply be thought of as a metric measuring the quality of the experiment, regardless of whether it takes place in the lab or in the field. Further, there are many ways to push on external validity in the lab (e.g., real effort) that should not be discounted. Some consideration of the Falk and Heckman Science paper should be given to make this discussion more balanced. Pre-Duflo, the notion of a field experiment extended to studies that use exactly the same design as in the lab but simply run it with a different population in the field (e.g., the 15 small-scale societies project). Post-Duflo, if you are not doing an RCT it is not a field experiment. This is just wrong. Under the broad category of "field experiment" should fit RCTs (which are often uninformative because the treatments one is allowed to do, e.g., information about this or that, are weak and determined by convenience or constraints that do not bind in the lab), lab-in-the-field studies like those mentioned above, and other things mentioned in the Harrison and List typology.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

Dr. Carpenter has published scholarly research which seems to be relevant to this Wikipedia article:


  • Reference : Carpenter, Jeffrey P. & Matthews, Peter Hans, 2015. "Incentives and the Design of Charitable Fundraisers: Lessons from a Field Experiment," IZA Discussion Papers 8952, Institute for the Study of Labor (IZA).

ExpertIdeasBot (talk) 23:38, 19 May 2016 (UTC)

Dr. Viceisza's comment on this article

Dr. Viceisza has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


The first paragraph implies too narrow a definition of "field experiments", at least in economics. For example, the following is stated:

"A field experiment applies the scientific method to experimentally examine an intervention in the real world (or as many experimentalists like to say, naturally occurring environments) rather than in the laboratory."

This is not correct according to the taxonomy of Harrison and List (Journal of Economic Literature 2004), which is taken as canonical in experimental economics. In their taxonomy, which covers the spectrum from the lab to the field, a lab experiment is one that conventionally uses student subjects in universities and has them make decisions in an artefactual environment (i.e., a neutrally framed decision in tasks/games that are not part of the subjects' naturally occurring, day-to-day decision-making environment).

But there are different types of field experiments. The type described in this Wikipedia article is closest to what Harrison and List call natural field experiments, which is also what development economists have come to refer to as randomized field experiments or randomized controlled trials (RCTs). By the way, the use of the term RCT as synonymous with natural field experiments is misleading, since a lab experiment in which subjects are randomized to different conditions is also an RCT. That is, the term RCT in itself does not imply the use of field participants or decision-making in a field environment. That said, in development economics it typically does.

Returning to the broader point, what Harrison and List call artefactual and framed field experiments are also field experiments, but they do not fit with the narrow definition of the article. In other words, they can be like lab experiments in that subjects make decisions that are not part of their day-to-day environment and know they are in an experiment, but they are still field experiments in that the subjects in question are non-students such as farmers and managers of firms -- they are field subjects.

Since Harrison and List, others have also used the term extra-laboratory experiments (see for example Charness et al., Journal of Economic Behavior and Organization 2013) and lablike field experiments or lab-in-the-field experiments (see for example Viceisza, Journal of Economic Surveys forthcoming) to describe artefactual and framed field experiments. These terms try to cover those experiments that are neither pure lab nor pure field if one takes pure field to be synonymous with natural field experiments.

In short, the whole article should be rewritten to account for a more general taxonomy such as Harrison and List's and the nuances that exist when moving from the lab to the field and vice versa. I cannot do so as it would take too much time. Hopefully, some of what I say here can be copied/pasted. Also, see Viceisza (2012) for additional discussion. It should also be noted somewhere (if possible) that experimental economists and development economists tend to maintain slightly different definitions for these terms.

The section on "Field Experiments in International Development" should reflect the above spectrum as well. In other words, the examples and descriptions assume that a "field experiment" is defined as a natural field experiment (in the taxonomy of Harrison and List), aka an RCT (in the parlance of most development economists). However, so-called extra-laboratory experiments, aka lablike field experiments or lab-in-the-field experiments, are also field experiments and can very well be implemented in development economics. In fact, as Viceisza (Journal of Economic Surveys, forthcoming) discusses, this is a growing area at the intersection of experimental, behavioral, and development economics, and scholars at this intersection are increasingly combining different types of field experiments to better inform policy decisions.

A classic example is combining risk- and time-preference data (elicited by means of lablike field experiments along the lines of the already cited Binswanger) with data from a randomized intervention (aka RCT) to explore how impacts vary with participants' risk aversion. As Viceisza (forthcoming) also argues, researchers are, more broadly, combining experimental approaches (sometimes referred to as reduced-form methods) with structural approaches (see for example Mahajan and Tarozzi, Working Paper 2011).
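To make the combination described above concrete, here is a minimal sketch (not from the article; the dataset and the column names outcome, treated, and risk_aversion are hypothetical) of how an elicited risk-aversion measure might be interacted with a randomized treatment indicator to explore heterogeneous impacts:

```python
# Hedged sketch: merge lab-in-the-field risk-preference measures with RCT assignment
# and test whether the treatment effect varies with risk aversion.
# Dataset and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("merged_experiment_data.csv")  # hypothetical merged dataset

# The coefficient on treated:risk_aversion captures how the randomized
# intervention's impact changes with the elicited risk-aversion measure.
model = smf.ols("outcome ~ treated * risk_aversion", data=df).fit(cov_type="HC1")
print(model.summary())
```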

Finally, the section on "Caveats" should also be edited. In particular, the following is stated:

"Field testing is always less controlled than laboratory testing. This increases the variability of the types and magnitudes of stress in field testing. The resulting data, therefore, are more varied: larger standard deviation, less precision and accuracy, etc. This leads to the use of larger sample sizes for field testing."

As List and his co-authors have argued in several papers (see, for example, his work with Al-Ubaydli; with Levitt; and with Sadoff and Wagner), this need not be true. The fact that subjects in lab experiments and lablike field experiments know they are being studied can in itself be a confounding factor and thus lead to less experimental control. So, field experiments (or field testing) need not always be less controlled than laboratory testing. There may be instances where the opposite is true.

So, I would revise this to say, "Depending on the conditions, field testing may be less controlled than ..." One could also supplement this section with the counterexample I indicate above.
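The sample-size point in the quoted passage follows from the standard two-arm power calculation, in which the required number of observations per arm scales with the outcome variance, whichever setting (lab or field) happens to be noisier. A minimal illustration with purely illustrative numbers:

```python
# Required n per arm for a two-sided test of a mean difference delta,
# at significance level alpha and the given power; n grows with sigma**2.
from scipy.stats import norm

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

print(n_per_arm(sigma=1.0, delta=0.25))  # about 251 per arm
print(n_per_arm(sigma=2.0, delta=0.25))  # about 1005 per arm: 4x the variance, 4x the n
```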

Bibliography:

Al-Ubaydli, O. and J. A. List (2012, March). On the generalizability of experimental results in economics. Working Paper 17957, National Bureau of Economic Research.

Charness, G., U. Gneezy, and M. A. Kuhn (2013). Experimental methods: Extra-laboratory experiments - extending the reach of experimental economics. Journal of Economic Behavior & Organization 91, 93–100.

Harrison, G. W. and J. A. List (2004). Field experiments. Journal of Economic Literature 42 (4), 1009–1055.

Levitt, S. D. and J. A. List (2009). Field experiments in economics: The past, the present, and the future. European Economic Review 53 (1), 1–18.

List, J., S. Sadoff, and M. Wagner (2011). So you want to run an experiment, now what? some simple rules of thumb for optimal experimental design. Experimental Economics 14, 439–457.

Mahajan, A. and A. Tarozzi (2011). Time inconsistency, expectations and technology adoption: The case of insecticide treated nets. Working Paper.

Viceisza, A. C. G. (2012). Treating the Field As a Lab: A Basic Guide to Conducting Economics Experiments for Policymaking, Volume 7 of Food Security in Practice Technical Guide Series. Washington, DC: International Food Policy Research Institute (IFPRI).

Viceisza, A. C. G. (2015). Creating a Lab in the Field: Economics Experiments for Policymaking, Journal of Economic Surveys, forthcoming.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

Dr. Viceisza has published scholarly research which seems to be relevant to this Wikipedia article:


  • Reference : Torero, Maximo & Viceisza, Angelino, 2014. "To remit, or not to remit: that is the question. A remittance field experiment," MPRA Paper 61786, University Library of Munich, Germany.

ExpertIdeasBot (talk) 12:08, 2 June 2016 (UTC)

Dr. Cardenas's comment on this article

Dr. Cardenas has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


The article is a little ambiguous in differentiating between RCTs (randomized controlled trials) and lab-in-the-field experiments. A short discussion of the potential ethical risks of ex-post effects after the experiments have been implemented in the field would also be useful, particularly regarding the control groups that did not receive a potentially beneficial treatment.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

Dr. Cardenas has published scholarly research which seems to be relevant to this Wikipedia article:


  • Reference : Cardenas, Juan-Camilo & Rodriguez, Luz Angela & Johnson, Nancy L., 2009. "Collective Action for Watershed Management: Field Experiments in Colombia and Kenya," Documentos CEDE Series 91169, Universidad de Los Andes, Economics Department.

ExpertIdeasBot (talk) 19:04, 15 June 2016 (UTC)

Dr. Baert's comment on this article

Dr. Baert has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


I like the introduction, but do not understand the later focus on the narrow field of "International Development Research", while field experiments are heavily used in broader and more popular fields such as labour economics, e.g. to measure discrimination. See your Wiki page https://en.wikipedia.org/wiki/Employment_discrimination#By_region or the following field-experimental studies.

Borjas, G. J. (1980): Promotions and Wage Discrimination in the Federal Government: Evidence from HEW. Economics Letters, 6, 381-385.

Baert, S., De Pauw, A.-S., Deschacht, N. (2016): Do Employer Preferences Contribute to Sticky Floors? Industrial & Labor Relations Review, 69, 714-736.

Booth, A. L., Francesconi, M., Frank, J. (2003): A sticky floors model of promotion, pay, and gender. European Economic Review, 47, 295-322.


Secondly, the discussion of hiring discrimination does not reflect the state of the art. The gold standard for identifying discrimination in the labour market is based on randomised field experiments in which fictitious job applications, differing only in one aspect, are sent to real employers (a sketch of how such data are typically analysed follows the reference list below). In this respect, you might consider providing the reader with a summary of https://en.wikipedia.org/wiki/Employment_discrimination#By_region. Some references:

Albert, R., Escot, L., Fernandez-Cornejo, J. (2011): A field experiment to study sex, age discrimination in the Madrid labour market. International Journal of Human Resource Management, 22, 351–375.

Altonji, J. G., Blank, R. M. (1999): Race and gender in the labor market. Handbook of labor economics, 3, 3143–3259

Baert, S., De Pauw, A.-S., Deschacht, N. (2016): Do Employer Preferences Contribute to Sticky Floors? Industrial & Labor Relations Review, 69, 714-736.

Baert, S. (2016): Wage Subsidies and Hiring Chances for the Disabled: Some Causal Evidence. European Journal of Health Economics, 17, 71-86.

Baert, S., Omey, E. (2015): Hiring Discrimination against Pro-Union Applicants: The Role of Union Density and Firm Size. Economist, 163, 263-280.

Baert, S., Cockx, B., Gheyle, N., Vandamme, C. (2015): Is There Less Discrimination in Occupations Where Recruitment Is Difficult? Industrial & Labor Relations Review, 68, 467-500.

Baert, S., Verhofstadt, E. (2015): Labour market discrimination against former juvenile delinquents: evidence from a field experiment. Applied Economics, 47, 1061-1072.

Baert, S., Balcaen, P. (2013): The Impact of Military Work Experience on Later Hiring Chances in the Civilian Labour Market. Evidence from a Field Experiment. Economics: The Open-Access, Open-Assessment E-Journal, 7, 2013-37.

Baert, S., Norga, J., Thuy, Y., Van Hecke, M. (2015b): Getting Grey Hairs in the Labour Market. A Realistic Experiment on Age Discrimination. IZA Discussion Papers, 9289.

Banerjee, A., Bertrand, M., Datta, S., Mullainathan, S. (2009): Labor market discrimination in Delhi: Evidence from a field experiment. Journal of Comparative Economics, 37, 14–27.

Bertrand, M., Mullainathan, S. (2004): Are Emily, Greg more employable than Lakisha, Jamal? A field experiment on labor market discrimination. American Economic Review, 94, 991–1013.

Blinder, A. (1973): Wage Discrimination: Reduced Form and Structural Estimates. Journal of Human Resources, 8, 436–455.

Booth, A., Leigh, A., Varganova, E. (2012): Does ethnic discrimination vary across minority groups? Evidence from a field experiment. Oxford Bulletin of Economics and Statistics, 74, 547–573.

Carlsson, M. (2010): Experimental Evidence of Discrimination in the Hiring of First‐and Second‐generation Immigrants. Labour, 24, 263–278.

Carlsson, M. (2011): Does hiring discrimination cause gender segregation in the Swedish labor market? Feminist Economics, 17, 71–102.

Carlsson, M., Rooth, D. O. (2007): Evidence of ethnic discrimination in the Swedish labor market using experimental data. Labour Economics, 14, 716–729.

Drydakis, N. (2009): Sexual Orientation Discrimination in the Labour Market. Labour Economics, 16, 364–372.

Drydakis, N. (2011): Women’s Sexual Orientation and Labor Market Outcomes in Greece. Feminist Economics, 11, 89–117.

Drydakis, N. (2014): Sexual Orientation Discrimination in the Cypriot Labour Market. Distastes or Uncertainty? International Journal of Manpower, 35, 720–744.

Drydakis, N. (2015): Measuring Sexual Orientation Discrimination in the UK’s Labour Market; A Field Experiment. Human Relations, 68, 1769‒1796.

Drydakis, N., Vlassis, M. (2010): Ethnic Discrimination in the Greek Labour Market: Occupational Access, Insurance Coverage, and Wage Offers. Manchester School, 78, 201–218.

Lahey, J. (2008): Age, women, hiring: An experimental study. Journal of Human Resources, 43, 30–56.

Neumark, D. (2012): Detecting discrimination in audit and correspondence studies. Journal of Human Resources, 47, 1128–1157.

Oreopoulos, P. (2011): Why do skilled immigrants struggle in the labor market? A field experiment with thirteen thousand resumes. American Economic Journal: Economic Policy, 3, 148–171.

Pager, D. (2007): The use of field experiments for studies of employment discrimination: Contributions, critiques, directions for the future. Annals of the American Academy of Political and Social Sciences, 609, 104–133.

Petit, P. (2007): The effects of age, family constraints on gender hiring discrimination: A field experiment in the French financial sector. Labour Economics, 14, 371–391.

Riach, P. A., Rich, J. (2002): Field Experiments of Discrimination in the Market Place. Economic Journal 112, 480–518.

Rich, J., Fontinha, R., Lansbury, L. (2015): Do employers discriminate against trade union activists?: An experimental study of Poland, Portugal and UK. Presented at the Economics & Finance Subject Group Staff Seminar of 4 March 2015, University of Portsmouth.

Rich, J. (2014): What Do Field Experiments of Discrimination in Markets Tell Us? A Meta Analysis of Studies Conducted since 2000. IZA Discussion Paper Series, 8584.

Tilcsik, A. (2011): Pride, prejudice: Employment discrimination against openly gay men in the United States. American Journal of Sociology, 117, 586–626.

Zhou, X., Zhang, J., Song, X. (2013): Gender discrimination in hiring: Evidence from 19,130 resumes in China. Mimeo.
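As flagged above, correspondence experiments of the kind these studies use are typically analysed by comparing callback rates across otherwise identical fictitious applications. A minimal sketch, with invented, purely illustrative counts rather than data from any of the cited papers:

```python
# Hedged sketch: two-proportion test of callback rates in a correspondence study.
# The counts below are invented for illustration only.
from statsmodels.stats.proportion import proportions_ztest

callbacks = [94, 157]        # callbacks received by the two application groups
applications = [1000, 1000]  # fictitious applications sent per group

stat, pvalue = proportions_ztest(count=callbacks, nobs=applications)
print(f"callback rates: {callbacks[0]/applications[0]:.1%} vs {callbacks[1]/applications[1]:.1%}")
print(f"two-proportion z-test: z = {stat:.2f}, p = {pvalue:.4f}")
```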


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. Baert has expertise on the topic of this article, since he has published relevant scholarly research:


  • Reference : Baert, Stijn & Cockx, Bart & Gheyle, Niels & Vandamme, Cora, 2013. "Do Employers Discriminate Less If Vacancies Are Difficult to Fill? Evidence from a Field Experiment," IZA Discussion Papers 7145, Institute for the Study of Labor (IZA).

ExpertIdeasBot (talk) 20:28, 1 July 2016 (UTC)

Dr. Alevy's comment on this article

Dr. Alevy has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


The discussion is very broad because it is not concerned only with economic experiments; however, most of the experimentalists mentioned are economists. The citations are very weak, and expansion should begin with Harrison and List (2004, "Field Experiments," Journal of Economic Literature), which is strong on characterizing the different types of field experiments in economics.

List and Metcalfe (2015; Field Experiments in the Developed World; Oxford Review of Economic Policy) complements the cited work by Esther Duflo.

The methodological handbook of Frechette and Schotter contains several important chapters on the differing contributions of lab and field methods (2015; Handbook of Experimental Economic Methodology; Oxford).


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. Alevy has expertise on the topic of this article, since he has published relevant scholarly research:


  • Reference : Jonathan E. Alevy & Craig E. Landry & John A. List, 2011. "Field Experiments on Anchoring of Economic Valuations," Working Papers 2011-02, University of Alaska Anchorage, Department of Economics.

ExpertIdeasBot (talk) 17:06, 27 July 2016 (UTC)

Dr. List's comment on this article

Dr. List has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


I frequently attribute the first field experiments in economics to Heather Ross, whose work on the negative income tax was very influential. Otherwise this is very good.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. List has expertise on the topic of this article, since he has published relevant scholarly research:


  • Reference 1: Rondeau, Daniel & List, John A., 2008. "Matching and Challenge Gifts to Charity: Evidence from Laboratory and Natural Field Experiments," IZA Discussion Papers 3278, Institute for the Study of Labor (IZA).
  • Reference 2: Houser, Daniel & List, John A. & Piovesan, Marco & Samek, Anya & Winter, Joachim K., 2015. "On the Origins of Dishonesty: From Parents to Children," IZA Discussion Papers 8906, Institute for the Study of Labor (IZA).

ExpertIdeasBot (talk) 22:04, 15 August 2016 (UTC)

Dr. Dur's comment on this article

Dr. Dur has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


"Economists have used field experiments to analyze discrimination, health care programs, charitable fundraising, education, information aggregation in markets, and microfinance programs." <-- It might be useful to add a footnote with a reference here to this article: John A. List (2011), "Why Economists Should Conduct Field Experiments and 14 Tips for Pulling One Off," Journal of Economic Perspectives, 25(3), 3-16 (open access: https://www.aeaweb.org/articles?id=10.1257/jep.25.3.3).


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. Dur has expertise on the topic of this article, since he has published relevant scholarly research:


  • Reference : Delfgaauw, Josse & Dur, Robert & Non, Arjan & Verbeke, Willem, 2013. "Dynamic Incentive Effects of Relative Performance Pay: A Field Experiment," IZA Discussion Papers 7652, Institute for the Study of Labor (IZA).

ExpertIdeasBot (talk) 15:50, 24 August 2016 (UTC)